US regulator says AI scanner 'deceived' users after BBC story

BBC News

"The FTC has been clear that claims about technology – including artificial intelligence – need to be backed up", said Samuel Levine, Director of the Bureau of Consumer Protection. Evolv Technology aims to replace metal detectors with scanners that use artificial intelligence to actively detect concealed weapons such as bombs, knives and guns. The FTC's complaint alleges the company deceptively advertised that its scanners would detect "all weapons". In 2022 the BBC outlined some of the impressive claims Evolv's then CEO had made about the technology.


Tesla Autopilot feature was involved in 13 fatal crashes, US regulator says

The Guardian

US auto-safety regulators said on Friday that their investigation into Tesla's Autopilot had identified at least 13 fatal crashes in which the feature had been involved. The investigation also found that the electric carmaker's claims did not match up with reality. The National Highway Traffic Safety Administration (NHTSA) disclosed on Friday that during its three-year Autopilot safety investigation, which it launched in August 2021, it identified at least 13 Tesla crashes involving one or more deaths, and many more involving serious injuries, in which "foreseeable driver misuse of the system played an apparent role". It also found evidence that "Tesla's weak driver engagement system was not appropriate for Autopilot's permissive operating capabilities", which resulted in a "critical safety gap". The NHTSA also raised concerns that Tesla's Autopilot name "may lead drivers to believe that the automation has greater capabilities than it does and invite drivers to overly trust the automation".


Tesla Autopilot head Andrej Karpathy leaves as company faces renewed crash probes

Daily Mail - Science & tech

Tesla Director of Artificial Intelligence and Autopilot Andrej Karpathy is leaving the company at a critical time, as it faces renewed probes over crashes and growing scrutiny. "It's been a great pleasure to help Tesla towards its goals over the last 5 years and a difficult decision to part ways. In that time, Autopilot graduated from lane keeping to city streets and I look forward to seeing the exceptionally strong Autopilot team continue that momentum," he wrote on Twitter, noting that he has no plans for what's next. Tesla CEO Elon Musk replied to thank him for his work at the company. The leadership change comes at a challenging time, as Tesla faces renewed scrutiny from US regulators over crashes involving drivers who used Autopilot and works to expand the latest version of Full Self Driving (FSD) to a larger number of customers.


US Sues To Block Chipmaker Nvidia's $40 Bn Merger With UK's Arm

International Business Times

US regulators filed a lawsuit Thursday to block the $40-billion merger of graphics chip star Nvidia with mobile chip technology powerhouse Arm Ltd, fearing it would undermine competition. The move comes as US President Joe Biden strives to ramp up domestic chip production to ease American industry's reliance on imports. "The proposed vertical deal would give one of the largest chip companies control over the computing technology and designs that rival firms rely on to develop their own competing chips," the Federal Trade Commission said in a release, calling chips "critical infrastructure." The world faces a global shortage of semiconductors, choking production of a wide range of products including automobiles and sending new and used car prices surging. The FTC echoed concerns expressed about the merger by regulators in the United Kingdom, who recently ordered an in-depth probe of the takeover.


What is next for AI regulation?

#artificialintelligence

In September 2021, there was a panel at a ForHumanity conference, with senior guests from the US Equal Employment Opportunity Commission (EEOC), the US Government Accountability Office, the European Commission and the UK Accreditation Service. The topic was AI-specific regulation: whether it is needed, the progress being made and the complexities of implementation. Paul Nemitz, from the European Commission, outlined the need for the proposed AI Act in the EU. Whilst GDPR regulates automated decision-making, it is focused on the use of personal data rather than the technologies themselves. In Paul's opinion this leaves a gap, and the Act is expected to pass early in 2022.


For Tesla Probe, US Regulators Seek Data From 12 Automakers

International Business Times

The US highway safety watchdog asked 12 automakers Tuesday to provide data on their driver assistance systems as part of a preliminary investigation of Tesla, whose cars were involved in several accidents with first responder vehicles. The National Highway Traffic Safety Administration seeks to conduct a benchmark analysis of vehicles whose models have the ability, under certain circumstances, to automatically control both the steering and the braking or acceleration. NHTSA sent letters, dated September 13 and seen by AFP, to BMW, Ford, General Motors, Honda, Hyundai, Kia, Mercedes-Benz, Nissan, Stellantis, Subaru, Toyota and Volkswagen. The agency began its probe in August after documenting 11 Tesla accidents since early 2018 involving a car from the company founded by tech titan Elon Musk and emergency vehicles including police cruisers. The incidents included one fatal crash and seven that resulted in injuries to a total of 17 people, according to the NHTSA.


US regulators are seeking info on AI in banking, and that could be good news for banks

#artificialintelligence

A group of US banking regulators--including the Federal Reserve, the Federal Deposit Insurance Corporation (FDIC), and the Consumer Financial Protection Bureau (CFPB), among others--has issued a statement that they're seeking public input on the rising usage of AI by financial institutions (FIs), Reuters reports. They specified that they want feedback on "how financial institutions use AI in their activities, including fraud prevention, personalization of customer services, credit underwriting, and other operations." The request for information also seeks to understand whether any clarification from the regulatory agencies would help FIs use AI in a safe way that complies with laws and regulations. Regulatory scrutiny of AI in banking could add to the perceived risk that banks take on when they deploy the technology in their businesses. And any trepidation that banks may feel regarding the deployment of AI may only be heightened by the sense that government bodies have started paying close attention to the presence of AI in the banking industry. But the exploratory interest of regulatory agencies is also a major opportunity for banks and regulators to align on the benefits, pitfalls, and expectations for AI in the sector.


A Regulation Revolution In Financial Services

#artificialintelligence

If your professional interests take you to the crossroads of financial services, regulation, compliance, and digital - especially data analytics and machine learning - which altogether is known as regtech, you are in the right place. You are part of a statistically small and very geek-oriented professional community, but you know this, and though you might choose not to admit it to strangers at this year's festive parties for fear of causing great pain by boredom, you are in good company with this Contributor and my interviewee. I first met Jo Ann Barefoot when I was chairing the U.K. Financial Conduct Authority (FCA) Industry Sandbox Consultation, where she provided excellent guidance and insights. Jo Ann is one of the most dedicated and busiest advocates of the regtech space on the planet and is truly outstanding in both her knowledge and passion in this area. She dedicates her time to a number of global bodies and initiatives related to regtech: she is a Senior Fellow Emerita at the Harvard Kennedy School Center for Business & Government, a Senior Advisor to the Omidyar Network, sits on the fintech advisory committee for FINRA, is an Executive Board Member of the International RegTech Association (IRTA), is a member of the Milken Institute U.S. FinTech Advisory Committee, and chairs the boards of the Center for Financial Services Innovation and FinRegLab.


US regulators investigating Google's self driving car crash

AITopics Original Links

The top U.S. auto safety regulator said on Thursday the agency is seeking additional details of a recent crash of an Alphabet Google self-driving car in California. National Highway Traffic Safety Administration (NHTSA) chief Mark Rosekind told Reuters on the sidelines of an event on highway safety that the agency is collecting more information to get a "more detailed exploration of what exactly happened." A Google self-driving car struck a municipal bus in Mountain View in a minor crash on Feb. 14, and the search engine firm said it bears "some responsibility" for the incident in what may be the first crash that was the fault of the self-driving vehicle. Footage recorded by cameras on the bus shows the Lexus SUV, which Google has outfitted with sensors and cameras that let it drive itself, edging into the path of the bus that was rolling by at about 15 mph. Here, it can be seen on the right of the image, next to the kerb. Neither the Google employee in the driver's seat -- who must be there under California law to take the wheel in an emergency -- nor the 16 people on the bus were injured.


Apple reveals autonomous vehicle ambitions in letter to US regulators

#artificialintelligence

Apple has publicly revealed its ambitions to play in the emerging market of self-driving vehicles with a policy recommendation letter to the National Highway Traffic Safety Administration. Precise details of Apple automotive products were not revealed in the letter. "Apple uses machine learning to make its products and services smarter, more intuitive, and more personal. The company is investing heavily in the study of machine learning and automation, and is excited about the potential of automated systems in many areas, including transportation." Written by Apple Director of Product Integrity Steve Kenner, the letter calls for policies that will bring about: a clear understanding of who is liable for problems that occur when cars drive themselves; the maintenance of users' privacy, cybersecurity and physical safety; and ensuring that the impact of self-driving cars on the public is as positive as can be.